Advances on Testing C-Planarity of Embedded Flat Clustered Graphs
We show a polynomial-time algorithm for testing c-planarity of embedded flat clustered graphs with at most two vertices per cluster on each face.
Comment: Accepted at GD '1
Stress-Minimizing Orthogonal Layout of Data Flow Diagrams with Ports
We present a fundamentally different approach to orthogonal layout of data flow diagrams with ports, based on extending constrained stress majorization to cater for ports and flow layout. Because we are minimizing stress, we are able to better display global structure, as measured by several criteria such as stress, edge-length variance, and aspect ratio. Compared to the layered approach, our layouts tend to exhibit symmetries and eliminate inter-layer whitespace, making the diagrams more compact.
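The stress-majorization loop the approach builds on is simple to sketch. The snippet below is a minimal, unconstrained SMACOF-style iteration; it illustrates the stress function and the majorizing update, not the authors' port- and flow-constrained solver, and all names are illustrative:

```python
import numpy as np

def stress(X, D, W):
    # weighted stress: sum over i < j of w_ij * (||x_i - x_j|| - d_ij)^2
    n = len(X)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += W[i, j] * (np.linalg.norm(X[i] - X[j]) - D[i, j]) ** 2
    return s

def majorize(X, D, W, iters=100):
    # SMACOF-style update: each vertex moves to the weighted average of the
    # positions its neighbours "vote" for; stress decreases monotonically.
    n = len(X)
    for _ in range(iters):
        Xn = np.zeros_like(X)
        for i in range(n):
            num = np.zeros(X.shape[1])
            den = 0.0
            for j in range(n):
                if i == j:
                    continue
                dist = np.linalg.norm(X[i] - X[j])
                if dist < 1e-9:
                    continue  # skip coincident points to avoid division by zero
                num += W[i, j] * (X[j] + D[i, j] * (X[i] - X[j]) / dist)
                den += W[i, j]
            Xn[i] = num / den
        X = Xn
    return X
```

Using the common weights w_ij = 1/d_ij^2, each iteration is guaranteed not to increase the stress, which is what makes the constrained extension tractable.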
On the Maximum Crossing Number
Research on crossings typically concerns their minimization. In this paper, we
consider \emph{maximizing} the number of crossings over all possible ways to
draw a given graph in the plane. Alpert et al. [Electron. J. Combin., 2009]
conjectured that any graph has a \emph{convex} straight-line drawing, i.e., a drawing with vertices in convex position, that maximizes the number of edge
crossings. We disprove this conjecture by constructing a planar graph on twelve
vertices that allows a non-convex drawing with more crossings than any convex
one. Bald et al. [Proc. COCOON, 2016] showed that it is NP-hard to compute the
maximum number of crossings of a geometric graph and that the weighted
geometric case is NP-hard to approximate. We strengthen these results by
showing hardness of approximation even for the unweighted geometric case and
prove that the unweighted topological case is NP-hard.
Comment: 16 pages, 5 figures
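For a given geometric (straight-line) drawing, the quantity being maximized can be computed by brute force. A minimal sketch, with hypothetical names, counting only proper crossings (edges sharing an endpoint never cross properly):

```python
import itertools

def ccw(a, b, c):
    # positive if a -> b -> c turns counter-clockwise, negative if clockwise
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    # proper crossing test for open segments (touching endpoints excluded)
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossing_count(points, edges):
    # brute-force count of pairwise edge crossings in a straight-line drawing
    count = 0
    for (u1, v1), (u2, v2) in itertools.combinations(edges, 2):
        if len({u1, v1, u2, v2}) < 4:
            continue  # edges sharing a vertex cannot cross properly
        if segments_cross(points[u1], points[v1], points[u2], points[v2]):
            count += 1
    return count
```

For K4 drawn with its four vertices in convex position, the count is 1; with one vertex inside the triangle of the other three, it is 0, so here the convex drawing attains the maximum.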
A New Perspective on Clustered Planarity as a Combinatorial Embedding Problem
The clustered planarity problem (c-planarity) asks whether a hierarchically
clustered graph admits a planar drawing such that the clusters can be nicely
represented by regions. We introduce the cd-tree data structure and give a new
characterization of c-planarity. It leads to efficient algorithms for
c-planarity testing in the following cases. (i) Every cluster and every
co-cluster (complement of a cluster) has at most two connected components. (ii)
Every cluster has at most five outgoing edges.
Moreover, the cd-tree reveals interesting connections between c-planarity and planarity with constraints on the order of edges around vertices. On the one hand, this gives rise to a number of new open problems related to c-planarity; on the other hand, it provides a new perspective on previous results.
Comment: 17 pages, 2 figures
Computing NodeTrix Representations of Clustered Graphs
NodeTrix representations are a popular way to visualize clustered graphs;
they represent clusters as adjacency matrices and inter-cluster edges as curves
connecting the matrix boundaries. We study the complexity of constructing
NodeTrix representations focusing on planarity testing problems, and we show
several NP-completeness results and some polynomial-time algorithms. Building
on such algorithms we develop a JavaScript library for NodeTrix representations
aimed at reducing the crossings between edges incident to the same matrix.
Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016)
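The underlying data transformation is easy to sketch. The helper below, with hypothetical names, splits a flat clustered graph into one adjacency matrix per cluster plus the list of inter-cluster edges that a NodeTrix drawing routes between matrix boundaries:

```python
from collections import defaultdict

def nodetrix_data(edges, cluster_of):
    # Group vertices by cluster and assign each a row/column index
    members = defaultdict(list)
    for v, c in cluster_of.items():
        members[c].append(v)
    index = {c: {v: i for i, v in enumerate(sorted(vs))}
             for c, vs in members.items()}

    # One symmetric 0/1 adjacency matrix per cluster, initially empty
    matrices = {c: [[0] * len(vs) for _ in vs] for c, vs in members.items()}
    inter = []  # edges drawn as curves between matrix boundaries

    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu == cv:  # intra-cluster edge: a cell of the cluster's matrix
            i, j = index[cu][u], index[cu][v]
            matrices[cu][i][j] = matrices[cu][j][i] = 1
        else:         # inter-cluster edge: routed between matrices
            inter.append((u, v))
    return matrices, inter
```

The planarity questions the paper studies then concern how the matrices and the inter-cluster curves can be arranged without (or with few) crossings.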
A Distributed Multilevel Force-directed Algorithm
The wide availability of powerful and inexpensive cloud computing services
naturally motivates the study of distributed graph layout algorithms, able to
scale to very large graphs. Nowadays, to process Big Data, companies are
increasingly relying on PaaS infrastructures rather than buying and maintaining
complex and expensive hardware. So far, only a few examples of basic force-directed algorithms that work in a distributed environment have been described. In contrast, the design of a distributed multilevel force-directed algorithm is a much more challenging task that has not yet been addressed. We present the
first multilevel force-directed algorithm based on a distributed vertex-centric
paradigm, and its implementation on Giraph, a popular platform for distributed
graph algorithms. Experiments show the effectiveness and the scalability of the
approach. Using an inexpensive Amazon cloud computing service, we draw graphs with ten million edges in about 60 minutes.
Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016)
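The vertex-centric idea can be mimicked in a few lines. The toy below is a synchronous, single-process sketch of a Pregel-style superstep loop with Fruchterman-Reingold-style forces; it is an illustrative assumption, not the paper's multilevel Giraph implementation:

```python
import math
import random

def vertex_centric_layout(adj, supersteps=50, area=1.0):
    # adj: dict vertex -> list of neighbours (undirected graph)
    n = len(adj)
    k = math.sqrt(area / n)  # ideal edge length
    rng = random.Random(0)
    pos = {v: (rng.random(), rng.random()) for v in adj}
    for _ in range(supersteps):
        new_pos = {}
        for v in adj:  # each "vertex program" reads last superstep's positions
            dx = dy = 0.0
            x, y = pos[v]
            for u in adj:          # repulsion from every other vertex
                if u == v:
                    continue
                ux, uy = pos[u]
                ddx, ddy = x - ux, y - uy
                dist = math.hypot(ddx, ddy) or 1e-9
                f = k * k / dist
                dx += ddx / dist * f
                dy += ddy / dist * f
            for u in adj[v]:       # attraction along incident edges
                ux, uy = pos[u]
                ddx, ddy = ux - x, uy - y
                dist = math.hypot(ddx, ddy) or 1e-9
                f = dist * dist / k
                dx += ddx / dist * f
                dy += ddy / dist * f
            new_pos[v] = (x + 0.01 * dx, y + 0.01 * dy)
        pos = new_pos  # synchronization barrier: all vertices update together
    return pos
```

A real vertex-centric implementation replaces the inner reads with messages exchanged between supersteps, and the multilevel scheme adds coarsening of the graph before layout.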
Performance of various homogenization tools on a synthetic benchmark dataset of GPS and ERA-interim IWV differences
Presentation given at the IAG-IASPEI 39th Joint Scientific Assembly, held in Kobe, Japan, 30 July to 4 August 2017.
Study on homogenization of synthetic GNSS-retrieved IWV time series and its impact on trend estimates with autoregressive noise
Poster presented at the EGU General Assembly, held 23-28 April 2017 in Vienna, Austria.
A synthetic benchmark dataset of Integrated Water Vapour (IWV) was created within the “Data homogenisation” activity of sub-working group WG3 of COST Action ES1206. The benchmark dataset was based on the analysis of differences between IWV retrieved at Global Positioning System (GPS) stations of the International GNSS Service (IGS) and IWV from European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data (ERA-Interim). Having analysed a set of 120 series of IWV differences (ERAI-GPS) derived for IGS stations, we characterized the number of gaps and breaks for each station. Moreover, we estimated the values of trends, significant seasonalities, and the character of the residuals once the deterministic model was removed. We tested five different noise models and found that a combination of white noise and a first-order autoregressive process describes the stochastic part with good accuracy. Based on this analysis, we performed Monte Carlo simulations of 25-year-long series with two different types of noise: white noise alone, and a combination of white and first-order autoregressive noise. We also added a few strictly defined offsets, creating three variants of the synthetic dataset: easy, less complicated, and fully complicated. The synthetic dataset presented here was used as a benchmark to test various statistical tools on the homogenisation task. In this research, we assess the impact of the noise model, trend, and gaps on the performance of statistical methods for detecting the simulated change points.
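The construction can be mimicked on a small scale. In the sketch below, every parameter value and the naive single-break detector are illustrative assumptions, not the benchmark's actual settings or any of the homogenisation tools that were tested:

```python
import numpy as np

def synthetic_iwv(n=9131, trend=0.0, offsets=((4000, 1.2),),
                  phi=0.6, sigma_ar=0.3, sigma_wh=0.5, seed=0):
    # Daily IWV-difference series: linear trend + annual cycle
    # + AR(1) noise + white noise + step offsets (all values illustrative).
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    series = trend * t + 0.8 * np.sin(2 * np.pi * t / 365.25)
    ar = np.zeros(n)
    for i in range(1, n):  # AR(1): x_i = phi * x_{i-1} + e_i
        ar[i] = phi * ar[i - 1] + rng.normal(0.0, sigma_ar)
    series += ar + rng.normal(0.0, sigma_wh, n)
    for pos, size in offsets:  # insert step changes (the "breaks")
        series[pos:] += size
    return series

def detect_break(x):
    # Naive single change-point detector: the split position maximizing the
    # two-sample t statistic between the left and right segments.
    n = len(x)
    best, best_t = None, 0.0
    for k in range(30, n - 30):
        a, b = x[:k], x[k:]
        s = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        t_stat = abs(a.mean() - b.mean()) / s
        if t_stat > best_t:
            best, best_t = k, t_stat
    return best
```

A real homogenisation tool must additionally account for the autocorrelated noise, the seasonal cycle, and multiple breaks, which is precisely what the benchmark is designed to probe.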
Homogenizing GPS integrated water vapour time series: methodology and benchmarking the algorithms on synthetic datasets
We would like to thank the COST Action ES1206 GNSS4SWEC for financial support.